The Ranking Process: A Multi-Perspectival Approach to Valuing Contribution

This page details the rigorous, transparent, and participatory methodology used to arrive at the final individual and team rankings for the Action Learning Journey (ALJ). Our process is not a simple scoring exercise; it is a synthesis of diverse, multi-perspectival data designed to honor our core principle of “evaluation as valuing”.

The final rankings are a direct reflection of the values and priorities co-created with the cohort, moving beyond static metrics to capture the dynamic flow of contribution, growth, and collaboration.

Guiding Principles of Our Ranking Methodology

Our ranking process is guided by three core principles to ensure fairness, depth, and relevance:

  1. Co-Created Framework: The criteria and their relative importance were not imposed. They were established with the participants during the “Valuing Workshop,” where the cohort collectively decided on the key value areas and their weights.

  2. Multi-Perspectival Data: Rankings are never based on a single source of truth. We synthesize data from self-evaluations, peer evaluations, facilitator observations, structured forms, and unstructured timelining entries to create a holistic and reliable picture.

  3. Evidence-Based & Transparent: Every step of the process, from data collection to final calculation, is documented and rooted in tangible evidence. The goal is to make the “how” of the ranking as clear and understandable as the “what.”

Part 1: The Individual Ranking Process

The individual rankings were designed to recognize and reward participants based on a composite understanding of their journey, combining self-reflection, peer recognition, and qualitative contributions.

Data Sources for Individual Ranking

Three primary data streams were integrated to calculate each participant’s final score:

  1. Self-Evaluation Forms: Participants completed daily forms where they rated their own performance on a scale of 1-5 across the four co-created value areas (Social Relationality, Learning & Application, Creativity, and Productivity). Each rating required a minimum one-sentence qualitative example to provide context.

  2. Peer-Evaluation Forms: Participants also ranked their peers in order of perceived contribution. This provided a crucial relational perspective on who was creating the most value for others.

  3. Qualitative “Power-Up” Points: We analyzed all qualitative data—including comments on forms and messages on the Telegram timelining channel—to identify instances where participants were mentioned as being particularly helpful or insightful. These mentions were converted into “power-up” points, rewarding the often-invisible labor of community support and knowledge sharing.
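To make the “power-up” step concrete, below is a minimal, hypothetical sketch of how positive mentions could be tallied into points. The message format, participant names, and one-point-per-mention rule are illustrative assumptions, not the exact pipeline used for the ALJ:

```python
from collections import Counter

def tally_power_ups(messages, participants):
    """Count one 'power-up' point per mention of a participant.

    messages: free-text entries (form comments, Telegram timeline posts).
    participants: names to scan for.
    Assumption: every mention earns one point; the real analysis also
    judged whether the mention described helpful or insightful behavior.
    """
    points = Counter()
    for text in messages:
        lowered = text.lower()
        for name in participants:
            if name.lower() in lowered:
                points[name] += 1
    return points

# Hypothetical usage with invented names and messages:
messages = [
    "Ama helped our team debug the survey logic late at night.",
    "Great synthesis session today, thanks Kofi and Ama!",
]
print(tally_power_ups(messages, ["Ama", "Kofi"]))
# Counter({'Ama': 2, 'Kofi': 1})
```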

Scoring Mechanism & Synthesis

The final individual score was a composite, calculated as follows:

  1. Self-Evaluation Score: The average of a participant’s daily self-ratings.

  2. Peer-Evaluation Score: A score derived from the participant’s average rank in peer evaluations.

  3. Power-Up Score: The total number of points accrued from positive qualitative mentions.

These three scores were then normalized and combined to produce a final, holistic individual ranking that balanced self-perception with community recognition.
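The page above does not spell out the exact normalization, so the following is a hedged sketch assuming simple min-max scaling to the 0-1 range and an equal-weight average of the three components; the sign flip on peer rank reflects that a lower average rank (closer to 1st) is better:

```python
def min_max(values):
    """Scale raw scores to the 0-1 range (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.5 for _ in values]  # no spread: treat everyone as equal
    return [(v - lo) / (hi - lo) for v in values]

def composite_scores(self_avgs, peer_avg_ranks, power_ups):
    """Combine the three components with equal weights (an assumption)."""
    self_n = min_max(self_avgs)
    # Negate ranks so a better (lower) average rank yields a higher score.
    peer_n = min_max([-r for r in peer_avg_ranks])
    power_n = min_max(power_ups)
    return [(s + p + w) / 3 for s, p, w in zip(self_n, peer_n, power_n)]

# Hypothetical data for three participants:
final = composite_scores(
    self_avgs=[4.2, 3.8, 4.6],       # mean of daily 1-5 self-ratings
    peer_avg_ranks=[1.5, 2.8, 2.0],  # mean position in peers' rankings
    power_ups=[7, 3, 5],             # qualitative mention points
)
print(final)  # higher = stronger composite standing
```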

[Insert Visualization: Individual Score Breakdown Bar Chart - Showing Self, Peer, and Power-Up components for each top participant]

Part 2: The Team Ranking Process

The team rankings were determined through an even more extensive synthesis process, designed to evaluate each team’s collective growth and output across the three strategic trajectories co-defined for the ALJ.

The Core Evaluation Areas & Co-Created Weights

Based on the “Valuing Workshop,” all team activities and contributions were assessed against three core areas, each with a specific, co-created weight that reflected the cohort’s priorities:

  • Regenerative Growth (3x weight): Focused on systems thinking, community impact, and ecological consciousness.

  • Team Potential (2x weight): Assessed leadership emergence, collaboration quality, and collective capacity building.

  • Business Viability (1x weight): Evaluated market understanding, technical execution, and long-term sustainability planning.

(Note: These weights were an evolution from an earlier 40%/30%/30% model, adapted live to better reflect the journey’s focus on social and learning dynamics over pure business output.)
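To illustrate the weighting with hypothetical numbers: a team scoring 7 on Regenerative Growth, 8 on Team Potential, and 6 on Business Viability (each on the 0-10 scale described under “Final Calculation & Ranking” below) would receive a weighted total of (3 × 7) + (2 × 8) + (1 × 6) = 43.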

The Synthesis Process: From Raw Data to Actionable Insight

This is where the full power of our evaluation stack was deployed.

  1. Multi-Source Data Integration: All relevant data—including all form responses, timelining entries, voice notes, and specific schema contributions—was aggregated into our Neo4j graph database. This created a rich, interconnected dataset linking every contribution to a person, team, time, and theme (a minimal ingestion sketch follows this list).

  2. AI-Powered Analysis with GraphRAG: We used our GraphRAG tool to perform deep, contextual analysis of the aggregated data. This allowed us to move beyond simple keyword searches and ask complex, relational questions, such as:

    • “Which team demonstrated the biggest positive shift in ‘collaboration’ according to peer ratings, and did their voice notes on the timeline reflect this change?”

    • “Synthesize all contributions tagged with #creativity for ‘Bars on Bars’ and compare their originality to the solutions proposed by ‘Bins to Cash’.”

  3. Quantitative & Statistical Analysis: Alongside the qualitative analysis, we ran statistical analyses on the structured form data to generate:

    • Score Distributions: To see the range and average of team scores in each value area.

    • Growth Trajectory Graphs: To visualize how each team’s performance evolved over the two weeks.

    • Comparative Heatmaps: To show at a glance which teams excelled in which specific areas.
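Returning to step 1, here is a minimal, hypothetical sketch of the graph ingestion using the official neo4j Python driver. The connection details, node labels, relationship types, and properties (Person, Team, Contribution, MEMBER_OF, MADE) are assumptions for illustration; the actual ALJ schema is not documented on this page:

```python
from neo4j import GraphDatabase

# Placeholder connection details.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def ingest_contribution(tx, person, team, text, theme, timestamp):
    """Link one contribution to its person, team, theme, and time."""
    tx.run(
        """
        MERGE (p:Person {name: $person})
        MERGE (t:Team {name: $team})
        MERGE (p)-[:MEMBER_OF]->(t)
        CREATE (c:Contribution {text: $text, theme: $theme, at: $timestamp})
        CREATE (p)-[:MADE]->(c)
        """,
        person=person, team=team, text=text, theme=theme, timestamp=timestamp,
    )

with driver.session() as session:
    session.execute_write(
        ingest_contribution,
        person="Ama",                 # invented participant
        team="Bars on Bars",          # team name from this journey
        text="Proposed the reuse loop for bottle caps",
        theme="creativity",
        timestamp="2025-05-20T14:32:00",
    )
driver.close()
```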

[Insert Visualization: Team Performance Radar Chart - Showing each team's final score across the three weighted areas]
[Insert Visualization: Team Growth Trajectory Line Graph - Plotting each team's score over the duration of the ALJ]
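As a hedged illustration of how a growth-trajectory graph like the one above could be produced from the aggregated form data (team names are from this journey; the scores and dates are invented):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical composite team scores sampled across the journey.
scores = pd.DataFrame(
    {
        "Bars on Bars": [5.2, 5.8, 6.4, 7.1],
        "Bins to Cash": [6.0, 6.1, 6.9, 6.8],
    },
    index=pd.to_datetime(["2025-05-19", "2025-05-21", "2025-05-23", "2025-05-25"]),
)

# One line per team, plotted over time.
scores.plot(marker="o", ylabel="Composite score (0-10)", title="Team Growth Trajectories")
plt.tight_layout()
plt.savefig("team_growth_trajectories.png")
```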

Final Calculation & Ranking

The final team ranking was the direct output of this rigorous synthesis:

  1. A normalized score (0-10) was generated for each team in each of the three core areas, based on the comprehensive data analysis.

  2. Each score was multiplied by its co-created weight (Regenerative Growth x3, Team Potential x2, Business Viability x1).

  3. The weighted scores were summed to produce a final total score for each team.

  4. Teams were then ranked according to this final, transparent, and evidence-backed score.
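Putting steps 1-3 together, a minimal sketch of the weighted calculation; the weights come from the Valuing Workshop described above, while the team names are from this journey and all scores are invented for illustration:

```python
# Co-created weights from the Valuing Workshop.
WEIGHTS = {"regenerative_growth": 3, "team_potential": 2, "business_viability": 1}

def final_team_score(area_scores):
    """Weighted sum of a team's normalized (0-10) scores per core area."""
    return sum(WEIGHTS[area] * score for area, score in area_scores.items())

# Hypothetical normalized scores for two teams:
teams = {
    "Bars on Bars": {"regenerative_growth": 8.0, "team_potential": 7.0, "business_viability": 6.0},
    "Bins to Cash": {"regenerative_growth": 7.5, "team_potential": 8.0, "business_viability": 7.0},
}

# Rank teams by their final weighted score, highest first.
for team, areas in sorted(teams.items(), key=lambda kv: final_team_score(kv[1]), reverse=True):
    print(f"{team}: {final_team_score(areas):.1f}")
# Bins to Cash: 45.5
# Bars on Bars: 44.0
```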

[Insert Visualization: Final Team Ranking Table with Weighted Score Breakdown]

From Rankings to Resource Allocation

Crucially, this entire process was not just an academic exercise. The final, weighted rankings were used to directly inform the distribution of the prize money, ensuring that resources flowed toward the teams that demonstrated the most value according to the very criteria the community itself had established.

By integrating co-created values, multi-source data, and advanced analytical tools, our ranking process provides a robust and defensible methodology for understanding and rewarding contribution in complex, emergent systems.
